IEEE 802


Channel State Information Analysis for Jamming Attack Detection in Static and Dynamic UAV Networks -- An Experimental Study

Mykytyn, Pavlo, Chitauro, Ronald, Dyka, Zoya, Langendoerfer, Peter

arXiv.org Artificial Intelligence

Networks built on the IEEE 802.11 standard have experienced rapid growth in the last decade. Their field of application is vast, including smart home applications, the Internet of Things (IoT), and short-range high-throughput static and dynamic inter-vehicular communication networks. Within such networks, Channel State Information (CSI) provides a detailed view of the state of the communication channel and represents the combined effects of multipath propagation, scattering, phase shift, fading, and power decay. In this work, we investigate the problem of jamming attack detection in static and dynamic vehicular networks. We utilize ESP32-S3 modules to set up a communication network between an Unmanned Aerial Vehicle (UAV) and a Ground Control Station (GCS), to experimentally test the combined effects of a constant jammer on recorded CSI parameters and the feasibility of jamming detection through CSI analysis in static and dynamic communication scenarios. The rapid expansion of IEEE 802.11 networks over the past decade has revolutionized wireless communications, particularly in applications such as smart homes [1], the Internet of Things (IoT) [2], industrial automation, and short-range high-throughput vehicular networks [3]. This can be attributed to their high throughput capabilities, ease of deployment, and the growing demand for internet connectivity. However, the widespread usage and extensive deployment of these networks make them an attractive target for malicious actors, leaving them exposed and susceptible to jamming attacks.
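
The paper does not reproduce its detection pipeline here, but the core idea, flagging a constant jammer from statistics of recorded CSI, can be illustrated with a minimal sketch. The score function, window shape, and fixed threshold below are assumptions for illustration, not the authors' method.

```python
import numpy as np

def jamming_score(csi_window: np.ndarray) -> float:
    """Anomaly score for a window of CSI measurements.

    csi_window: complex array of shape (n_packets, n_subcarriers),
    e.g. built from the amplitude/phase pairs an ESP32-S3 reports.
    """
    amplitude = np.abs(csi_window)
    mean_power = amplitude.mean()                # a constant jammer raises the noise floor
    temporal_var = amplitude.var(axis=0).mean()  # and distorts temporal stability
    return mean_power + temporal_var

def detect_jamming(csi_windows, baseline_score: float, factor: float = 2.0):
    """Flag windows whose score deviates from a clean-channel baseline."""
    for window in csi_windows:
        yield jamming_score(window) > factor * baseline_score
```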


Learning Multi-Access Point Coordination in Agentic AI Wi-Fi with Large Language Models

Fan, Yifan, Liang, Le, Liu, Peng, Li, Xiao, Guo, Ziyang, Lan, Qiao, Jin, Shi, Tong, Wen

arXiv.org Artificial Intelligence

Multi-access point coordination (MAPC) is a key technology for enhancing throughput in next-generation Wi-Fi within dense overlapping basic service sets. However, existing MAPC protocols rely on static, protocol-defined rules, which limits their ability to adapt to dynamic network conditions such as varying interference levels and topologies. To address this limitation, we propose a novel Agentic AI Wi-Fi framework where each access point, modeled as an autonomous large language model agent, collaboratively reasons about the network state and negotiates adaptive coordination strategies in real time. This dynamic collaboration is achieved through a cognitive workflow that enables the agents to engage in natural language dialogue, leveraging integrated memory, reflection, and tool use to ground their decisions in past experience and environmental feedback. Comprehensive simulation results demonstrate that our agentic framework successfully learns to adapt to diverse and dynamic network environments, significantly outperforming the state-of-the-art spatial reuse baseline and validating its potential as a robust and intelligent solution for future wireless networks. The upcoming IEEE 802.11bn standard, or Wi-Fi 8, introduces multi-access point coordination (MAPC) as a key mechanism to enhance performance in dense Wi-Fi deployments [1]. Specifically, MAPC enables neighboring access points (APs) in overlapping basic service sets (OBSS) to jointly manage radio resources, thereby mitigating the adverse impact of co-channel interference and boosting network throughput.
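
A cognitive workflow of this kind can be pictured as an agent loop around an LLM call. Everything below (the `query_llm` stub, prompt wording, and memory format) is a hypothetical sketch of the described memory/reflection cycle, not the paper's implementation.

```python
from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    """Placeholder for any LLM backend; the paper's model is not specified here."""
    raise NotImplementedError

@dataclass
class APAgent:
    name: str
    memory: list = field(default_factory=list)  # past proposals and outcomes

    def propose(self, network_state: dict) -> str:
        prompt = (
            f"You are access point {self.name}. Observed state: {network_state}. "
            f"Recent outcomes: {self.memory[-5:]}. "
            "Propose a MAPC strategy (e.g. coordinated slots or spatial reuse) "
            "and justify it in one sentence."
        )
        return query_llm(prompt)

    def reflect(self, proposal: str, throughput: float) -> None:
        # Ground future prompts in environmental feedback (the reflection step).
        self.memory.append({"proposal": proposal, "throughput": throughput})
```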


AI-Enhanced Distributed Channel Access for Collision Avoidance in Future Wi-Fi 8

Pan, Jinzhe, Wang, Jingqing, Ouyang, Yuehui, Cheng, Wenchi, Zhang, Wei

arXiv.org Artificial Intelligence

The exponential growth of wireless devices and stringent reliability requirements of emerging applications demand fundamental improvements in distributed channel access mechanisms for unlicensed bands. Current Wi-Fi systems, which rely on binary exponential backoff (BEB), suffer from suboptimal collision resolution in dense deployments and persistent fairness challenges due to inherent randomness. This paper introduces a multi-agent reinforcement learning framework that integrates artificial intelligence (AI) optimization with legacy device coexistence. We first develop a dynamic backoff selection mechanism that adapts to real-time channel conditions through access deferral events while maintaining full compatibility with conventional CSMA/CA operations. Second, we introduce a fairness quantification metric aligned with enhanced distributed channel access (EDCA) principles to ensure equitable medium access opportunities. Finally, we propose a centralized training decentralized execution (CTDE) architecture incorporating neighborhood activity patterns as observational inputs, optimized via constrained multi-agent proximal policy optimization (MAPPO) to jointly minimize collisions and guarantee fairness. Experimental results demonstrate that our solution significantly reduces collision probability compared to conventional BEB while preserving backward compatibility with commercial Wi-Fi devices. The proposed fairness metric effectively eliminates starvation risks in heterogeneous scenarios.
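
The paper's EDCA-aligned fairness metric is not reproduced here; as a stand-in, the sketch below uses Jain's fairness index and folds it into a reward that also penalizes collisions, one plausible shaping for the constrained MAPPO objective described above.

```python
import numpy as np

def jain_fairness(throughputs) -> float:
    """Jain's fairness index: 1.0 means perfectly equal medium access."""
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def reward(collision_prob: float, throughputs, lam: float = 1.0) -> float:
    """Joint objective: fewer collisions, more equitable access."""
    return -collision_prob + lam * jain_fairness(throughputs)
```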


A Hybrid TDMA/CSMA Protocol for Time-Sensitive Traffic in Robot Applications

Xu, Shiqi, Zhang, Lihao, Du, Yuyang, Yang, Qun, Liew, Soung Chang

arXiv.org Artificial Intelligence

Recent progress in robotics has underscored the demand for real-time control in applications such as manufacturing and healthcare systems, where the timely delivery of mission-critical commands under heterogeneous robotic traffic is paramount for operational efficacy and safety. In these scenarios, mission-critical traffic follows a strict deadline-constrained communication pattern: commands must arrive within defined deadlines, otherwise late arrivals can degrade performance or destabilize control loops. In this work, we demonstrate on a real-time software-defined radio (SDR) platform that CSMA, widely adopted in robotic communications, suffers severe degradation, with contention-induced collisions and delays disrupting the on-time arrival of mission-critical packets. This degradation arises under a common robotic traffic pattern where non-critical traffic dominates the channel, while lightweight mission-critical commands must be delivered frequently with strict deadlines over the shared medium. To address this, we propose an IEEE 802.11-compatible hybrid TDMA/CSMA protocol that combines TDMA's deterministic slot scheduling with CSMA's adaptability for heterogeneous robot traffic. The protocol achieves collision-free, low-latency mission-critical command delivery and IEEE 802.11 compatibility through the synergistic integration of sub-microsecond PTP-based slot synchronization, a three-section superframe with dynamic TDMA allocation for structured and adaptable traffic management, and beacon-NAV protection to preemptively secure critical communications from interference. Emulation experiments on a real-time SDR testbed show that the proposed protocol reduces missed-deadline errors by 93% compared to the CSMA baseline under a robotic traffic setup at an overall aggregate channel load of 77.1%, wherein 99.9% of the traffic is from non-time-critical applications and 0.1% is from deadline-constrained applications. In a high-speed robot path-tracking Robot Operating System (ROS) simulation, the protocol lowers root mean square trajectory error by up to 90% compared with the CSMA baseline, while maintaining throughput for non-critical traffic within 2%. Robotics has undergone remarkable advancements in recent years, playing critical roles in domains such as manufacturing [1], healthcare [2]-[4], and autonomous systems [5]. Multi-robot cooperation has emerged as a key enabler for complex robotic applications that require seamless coordination among multiple devices, such as collaborative assembly [6], warehouse automation [7], and search-and-rescue missions [8]. As the number of robots grows rapidly in a multi-robot system, communications between robots are becoming increasingly data-intensive.
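
The superframe logic can be sketched as a mapping from a PTP-synchronized clock to the current section. The three-section split mirrors the description above, but all durations are illustrative assumptions; the paper's actual slot parameters are not reproduced here.

```python
from enum import Enum

class Section(Enum):
    BEACON = 0  # beacon transmission plus NAV protection
    TDMA = 1    # deterministic slots for mission-critical commands
    CSMA = 2    # contention access for non-critical traffic

# Illustrative durations in microseconds (assumed, not from the paper).
BEACON_US, TDMA_US, SUPERFRAME_US = 500, 2000, 10_000

def section_at(t_us: int) -> Section:
    """Map a synchronized timestamp to its superframe section."""
    offset = t_us % SUPERFRAME_US
    if offset < BEACON_US:
        return Section.BEACON
    if offset < BEACON_US + TDMA_US:
        return Section.TDMA
    return Section.CSMA

def may_transmit(t_us: int, mission_critical: bool) -> bool:
    section = section_at(t_us)
    return section is (Section.TDMA if mission_critical else Section.CSMA)
```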


mmHSense: Multi-Modal and Distributed mmWave ISAC Datasets for Human Sensing

Bhat, Nabeel Nisar, Karnaukh, Maksim, Vandenbroeke, Stein, Lemoine, Wouter, Struye, Jakob, Lacruz, Jesus Omar, Kumar, Siddhartha, Moghaddam, Mohammad Hossein, Widmer, Joerg, Berkvens, Rafael, Famaey, Jeroen

arXiv.org Artificial Intelligence

This article presents mmHSense, a set of open labeled mmWave datasets to support human sensing research within Integrated Sensing and Communication (ISAC) systems. The datasets can be used to explore mmWave ISAC for various end applications such as gesture recognition, person identification, pose estimation, and localization. Moreover, the datasets can be used to develop and advance signal processing and deep learning research on mmWave ISAC. This article describes the testbed, experimental settings, and signal features for each dataset. Furthermore, the utility of the datasets is demonstrated through validation on a specific downstream task. In addition, we demonstrate the use of parameter-efficient fine-tuning to adapt ISAC models to different tasks, significantly reducing computational complexity while maintaining performance on prior tasks. Integrated Sensing and Communication (ISAC) [1] enables communication networks to double as intelligent sensing systems, supporting advanced human sensing applications. For instance, ISAC can enable Wi-Fi routers to recognize human gestures in smart-home applications [2].
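
One common form of parameter-efficient fine-tuning is to freeze a pretrained backbone and train only a small per-task head; the PyTorch sketch below illustrates that pattern. The layer sizes and task are placeholders, and the datasets' actual models and PEFT method may differ.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stand-in for a pretrained ISAC encoder
    nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64)
)
for p in backbone.parameters():
    p.requires_grad_(False)          # frozen: prior-task knowledge is preserved

gesture_head = nn.Linear(64, 10)     # new task head, e.g. 10 gesture classes
optimizer = torch.optim.Adam(gesture_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        z = backbone(features)       # only the head receives gradients
    loss = criterion(gesture_head(z), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```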


Neural Network-based Vehicular Channel Estimation Performance: Effect of Noise in the Training Set

Ngorima, Simbarashe Aldrin, Helberg, Albert, Davel, Marelie H.

arXiv.org Artificial Intelligence

Vehicular communication systems face significant challenges due to high mobility and rapidly changing environments, which affect the channel over which the signals travel. To address these challenges, neural network (NN)-based channel estimation methods have been suggested. These methods are primarily trained on high signal-to-noise ratio (SNR) data, under the assumption that training an NN in less noisy conditions results in good generalisation. This study examines the effectiveness of training NN-based channel estimators on mixed-SNR datasets compared to training solely on high-SNR datasets, as seen in several related works. Estimators evaluated in this work include an architecture that uses convolutional layers and self-attention mechanisms; a method that employs temporal convolutional networks and data-pilot-aided estimation; two methods that combine classical methods with multilayer perceptrons; and the current state-of-the-art model that combines Long Short-Term Memory networks with data-pilot-aided and temporal averaging methods as post-processing. Our results indicate that using only high-SNR data for training is not always optimal, and the SNR range of the training dataset should be treated as a hyperparameter that can be adjusted for better performance. This is illustrated by the better performance of some models in low-SNR conditions when trained on the mixed-SNR dataset, as opposed to when trained exclusively on high-SNR data.
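
Building such a mixed-SNR training set amounts to sampling a different SNR per training example when adding noise. A minimal NumPy sketch, assuming complex-baseband frames and additive white Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(frame: np.ndarray, snr_db: float) -> np.ndarray:
    """Add complex AWGN to a frame at the requested SNR (dB)."""
    sig_power = np.mean(np.abs(frame) ** 2)
    noise_power = sig_power / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(frame.shape) + 1j * rng.standard_normal(frame.shape)
    )
    return frame + noise

def make_training_set(clean_frames, snr_range_db=(0, 30)):
    """The SNR range is itself a tunable hyperparameter, per the study."""
    low, high = snr_range_db
    return [add_awgn(f, rng.uniform(low, high)) for f in clean_frames]
```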


Adaptive Resource Allocation Optimization Using Large Language Models in Dynamic Wireless Environments

Noh, Hyeonho, Shim, Byonghyo, Yang, Hyun Jong

arXiv.org Artificial Intelligence

Deep learning (DL) has made notable progress in addressing complex radio access network control challenges that conventional analytic methods have struggled to solve. However, DL has shown limitations in solving constrained NP-hard problems often encountered in network optimization, such as those involving quality of service (QoS) or discrete variables like user indices. Current solutions rely on domain-specific architectures or heuristic techniques, and a general DL approach for constrained optimization remains undeveloped. Moreover, even minor changes in communication objectives demand time-consuming retraining, limiting their adaptability to dynamic environments where task objectives, constraints, environmental factors, and communication scenarios frequently change. To address these challenges, we propose a large language model for resource allocation optimizer (LLM-RAO), a novel approach that harnesses the capabilities of LLMs to address the complex resource allocation problem while adhering to QoS constraints. By employing a prompt-based tuning strategy to flexibly convey ever-changing task descriptions and requirements to the LLM, LLM-RAO demonstrates robust performance and seamless adaptability in dynamic environments without requiring extensive retraining. Simulation results reveal that LLM-RAO achieves up to a 40% performance enhancement compared to conventional DL methods and up to an 80% improvement over analytical approaches. Moreover, in scenarios with fluctuating communication objectives, LLM-RAO attains up to 2.9 times the performance of traditional DL-based networks.
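
The prompt-based tuning strategy can be pictured as serializing the current task, constraints, and channel state into text and parsing the model's reply back into an allocation. The format below is a hypothetical sketch; `query_llm` stands in for any chat-completion backend and is not the paper's interface.

```python
def build_prompt(channel_gains, qos_min_rates, total_power) -> str:
    """Serialize the resource allocation task into natural language."""
    return (
        "Allocate transmit power across users to maximize sum rate.\n"
        f"Channel gains: {channel_gains}\n"
        f"Per-user minimum rates (QoS): {qos_min_rates}\n"
        f"Total power budget: {total_power}\n"
        "Reply with a comma-separated list of per-user powers."
    )

def parse_allocation(reply: str) -> list[float]:
    return [float(x) for x in reply.split(",")]

# Example wiring, with query_llm as an assumed backend:
# allocation = parse_allocation(
#     query_llm(build_prompt([0.9, 0.4], [1.0, 0.5], 10.0)))
```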


Cross-Technology Interference: Detection, Avoidance, and Coexistence Mechanisms in the ISM Bands

Kidane, Zegeye Mekasha, Dargie, Waltenegus

arXiv.org Artificial Intelligence

A large number of heterogeneous wireless networks share the unlicensed spectrum designated as the ISM (Industrial, Scientific, and Medical) radio band. These networks do not adhere to a common medium access rule and differ considerably in their specifications. As a result, when concurrently active, they cause cross-technology interference (CTI) to each other. The effect of this interference is not reciprocal: networks using high transmission power and advanced transmission schemes often cause disproportionate disruption to those with modest communication and computation resources. CTI corrupts packets, incurs packet retransmission costs, introduces end-to-end latency and jitter, and makes networks unpredictable. The purpose of this paper is to closely examine its impact on low-power networks based on the IEEE 802.15.4 standard. It discusses the latest developments in CTI detection, coexistence, and avoidance mechanisms, as well as messaging schemes that attempt to enable heterogeneous networks to communicate directly with one another to coordinate packet transmission and channel assignment.
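
As a concrete example of one detection approach in this space, energy-based CTI detection checks for power on the channel while the low-power network itself is silent. The thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_cti(idle_rssi_dbm: np.ndarray,
               noise_floor_dbm: float = -95.0,
               margin_db: float = 10.0,
               busy_fraction: float = 0.2) -> bool:
    """Flag likely cross-technology interference from idle-channel RSSI.

    Sustained energy above the noise floor while the IEEE 802.15.4
    network is not transmitting suggests a foreign (e.g. Wi-Fi) source.
    """
    busy = idle_rssi_dbm > noise_floor_dbm + margin_db
    return busy.mean() > busy_fraction
```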


Coordinated Multi-Armed Bandits for Improved Spatial Reuse in Wi-Fi

Wilhelmi, Francesc, Bellalta, Boris, Szott, Szymon, Kosek-Szott, Katarzyna, Barrachina-Muñoz, Sergio

arXiv.org Artificial Intelligence

Multi-Access Point Coordination (MAPC) and Artificial Intelligence and Machine Learning (AI/ML) are expected to be key features in future Wi-Fi, such as the forthcoming IEEE 802.11bn (Wi-Fi 8) and beyond. In this paper, we explore a coordinated solution based on online learning to drive the optimization of Spatial Reuse (SR), a method that allows multiple devices to perform simultaneous transmissions by controlling interference through Packet Detect (PD) adjustment and transmit power control. In particular, we focus on a Multi-Agent Multi-Armed Bandit (MA-MAB) setting, where multiple decision-making agents concurrently configure SR parameters from coexisting networks by leveraging the MAPC framework, and study various algorithms and reward-sharing mechanisms. We evaluate different MA-MAB implementations using Komondor, a well-adopted Wi-Fi simulator, and demonstrate that AI-native SR enabled by coordinated MABs can improve the network performance over current Wi-Fi operation: mean throughput increases by 15%, fairness is improved by increasing the minimum throughput across the network by 210%, while the maximum access delay is kept below 3 ms.
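
A single agent in this setting can be as simple as an epsilon-greedy bandit over joint (PD threshold, transmit power) arms, updated with a shared throughput reward. The arm values and exploration rate below are illustrative; the paper evaluates several MA-MAB algorithms and reward-sharing schemes.

```python
import random
from itertools import product

# Candidate (OBSS/PD threshold, tx power) pairs in dBm; values are illustrative.
ARMS = list(product([-82, -72, -62], [15, 20]))

class EpsilonGreedySR:
    def __init__(self, eps: float = 0.1):
        self.eps = eps
        self.counts = {arm: 0 for arm in ARMS}
        self.values = {arm: 0.0 for arm in ARMS}

    def select(self):
        if random.random() < self.eps:
            return random.choice(ARMS)         # explore
        return max(ARMS, key=self.values.get)  # exploit best known arm

    def update(self, arm, reward: float) -> None:
        # Incremental mean of the (possibly network-shared) reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```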


Machine Learning & Wi-Fi: Unveiling the Path Towards AI/ML-Native IEEE 802.11 Networks

Wilhelmi, Francesc, Szott, Szymon, Kosek-Szott, Katarzyna, Bellalta, Boris

arXiv.org Artificial Intelligence

Artificial intelligence (AI) and machine learning (ML) are nowadays mature technologies considered essential for driving the evolution of future communications systems. Simultaneously, Wi-Fi technology has constantly evolved over the past three decades and incorporated new features generation after generation, thus gaining in complexity. As such, researchers have observed that AI/ML functionalities may be required to address the upcoming Wi-Fi challenges that will be otherwise difficult to solve with traditional approaches. This paper discusses the role of AI/ML in current and future Wi-Fi networks and depicts the ways forward. A roadmap towards AI/ML-native Wi-Fi, key challenges, standardization efforts, and major enablers are also discussed. An exemplary use case is provided to showcase the potential of AI/ML in Wi-Fi at different adoption stages.